In this paper, we address the problem of lip-voice synchronisation in videos containing human face and voice. Our approach is based on determining whether the lip motion and the voice in a video are synchronised, depending on their audio-visual correspondence score. We propose an audio-visual cross-modal transformer-based model that outperforms several baseline models on the audio-visual synchronisation task on the standard lip-reading speech benchmark dataset LRS2. While existing methods focus mainly on lip synchronisation in speech videos, we also consider the special case of the singing voice. The singing voice is a more challenging use case for synchronisation due to sustained vowel sounds. We also investigate the relevance of lip synchronisation models trained on speech datasets in the context of singing voice. Finally, we use the frozen visual features learned by our lip synchronisation model in the singing voice separation task to outperform a baseline audio-visual model trained end-to-end. Demos, source code, and pre-trained models are available at https://ipcv.github.io/vocalist/
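To make the idea of an audio-visual correspondence score concrete, here is a minimal PyTorch sketch of a cross-modal attention block in which visual (lip-motion) features query audio features and a pooled head emits a synchronisation score. The module name, dimensions, and pooling head are illustrative assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class CrossModalSyncScorer(nn.Module):
    """Toy cross-modal block: lip features attend to audio features,
    and the pooled output is mapped to a scalar sync score."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, 1)  # scalar correspondence score

    def forward(self, visual, audio):
        # visual: (B, Tv, dim) lip-motion embeddings
        # audio:  (B, Ta, dim) audio embeddings
        attended, _ = self.cross_attn(query=visual, key=audio, value=audio)
        fused = self.norm(visual + attended)   # residual + norm
        pooled = fused.mean(dim=1)             # temporal average pooling
        return self.head(pooled).squeeze(-1)   # (B,) sync score

scorer = CrossModalSyncScorer()
v = torch.randn(2, 25, 256)    # e.g. 25 video frames
a = torch.randn(2, 100, 256)   # e.g. 100 audio frames
print(scorer(v, a).shape)      # torch.Size([2])
```

Training would encourage a higher score for the in-sync pair than for a temporally shifted (off-sync) audio stream.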
This paper presents an audio-visual approach for voice separation which produces state-of-the-art results at low latency in two scenarios: speech and singing voice. The model is based on a two-stage network. Motion cues are obtained with a lightweight graph convolutional network that processes face landmarks. Then, both audio and motion features are fed to an audio-visual transformer which produces a fairly good estimation of the isolated target source. In a second stage, the predominant voice is enhanced with an audio-only network. We present different ablation studies and a comparison with state-of-the-art methods. Finally, we explore the transferability of models trained for speech separation to the task of singing voice separation. Demos, code, and weights are available at https://ipcv.github.io/vovit/
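As a rough illustration of the motion branch, the sketch below applies one graph-convolution step over face landmarks, treating each landmark as a graph node. The adjacency, dimensions, and normalisation are placeholder assumptions; the paper's lightweight GCN is not specified here.

```python
import torch
import torch.nn as nn

class LandmarkGraphConv(nn.Module):
    """One graph-convolution step over face landmarks:
    X' = ReLU(A_hat @ X @ W), with A_hat a row-normalized adjacency."""
    def __init__(self, in_dim=2, out_dim=32, num_landmarks=68):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        # Placeholder adjacency: self-loops plus a chain between consecutive
        # landmarks; a real model would encode actual facial topology.
        A = torch.eye(num_landmarks)
        idx = torch.arange(num_landmarks - 1)
        A[idx, idx + 1] = A[idx + 1, idx] = 1.0
        self.register_buffer("A_hat", A / A.sum(dim=1, keepdim=True))

    def forward(self, x):
        # x: (B, num_landmarks, in_dim) 2-D landmark coordinates
        return torch.relu(self.A_hat @ self.lin(x))

layer = LandmarkGraphConv()
motion = layer(torch.randn(4, 68, 2))  # (4, 68, 32) per-landmark features
```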
Erroneous correspondences between samples and their respective channel or target commonly arise in several real-world applications. For instance, whole-brain calcium imaging of freely moving organisms, multiple target tracking or multi-person contactless vital sign monitoring may be severely affected by mismatched sample-channel assignments. To systematically address this fundamental problem, we pose it as a signal reconstruction problem where we have lost correspondences between the samples and their respective channels. We show that under the assumption that the signals of interest admit a sparse representation over an overcomplete dictionary, unique signal recovery is possible. Our derivations reveal that the problem is equivalent to a structured unlabeled sensing problem without precise knowledge of the sensing matrix. Unfortunately, existing methods are neither robust to errors in the regressors nor do they exploit the structure of the problem. Therefore, we propose a novel robust two-step approach for the reconstruction of shuffled sparse signals. The performance and robustness of the proposed approach are illustrated in an application of whole-brain calcium imaging in computational neuroscience. The proposed framework can be generalized to sparse signal representations other than the ones considered in this work, so that it can be applied to a variety of real-world problems with imprecise measurement or channel assignment.
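The measurement model behind the problem can be simulated in a few lines: signals that are sparse over an overcomplete dictionary are observed through an unknown permutation of channel assignments. The snippet below is only a toy illustration of the setup, with a standard sparse-recovery step (OMP) on a correctly assigned channel for reference; it is not the robust two-step method proposed in the paper.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k, channels = 64, 128, 5, 10   # signal dim, atoms, sparsity, channels

D = rng.standard_normal((n, m))      # overcomplete dictionary (m > n)
D /= np.linalg.norm(D, axis=0)       # unit-norm atoms

S = np.zeros((m, channels))          # k-sparse code per channel
for c in range(channels):
    S[rng.choice(m, k, replace=False), c] = rng.standard_normal(k)
X = D @ S                            # true per-channel signals

P = rng.permutation(channels)        # unknown channel shuffling
Y = X[:, P]                          # observed: correspondences are lost

# Reference: with known correspondences, OMP recovers the sparse support.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(D, X[:, 0])
print(np.flatnonzero(omp.coef_))     # support of channel 0's code
```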
Feature selection is of great importance in Machine Learning, where it can be used to reduce the dimensionality of classification, ranking and prediction problems. The removal of redundant and noisy features can improve both the accuracy and scalability of the trained models. However, feature selection is a computationally expensive task whose solution space grows combinatorially. In this work, we consider in particular a quadratic feature selection problem that can be tackled with the Quantum Approximate Optimization Algorithm (QAOA), already employed in combinatorial optimization. First, we represent the feature selection problem with the QUBO formulation, which is then mapped to an Ising spin Hamiltonian. Then we apply QAOA with the goal of finding the ground state of this Hamiltonian, which corresponds to the optimal selection of features. In our experiments, we consider seven different real-world datasets with dimensionality up to 21 and run QAOA on both a quantum simulator and, for small datasets, the 7-qubit IBM (ibm-perth) quantum computer. We use the set of selected features to train a classification model and evaluate its accuracy. Our analysis shows that it is possible to tackle the feature selection problem with QAOA and that currently available quantum devices can be used effectively. Future studies could test a wider range of classification models as well as improve the effectiveness of QAOA by exploring better-performing optimizers for its classical step.
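A hedged sketch of the first step: build a QUBO for quadratic feature selection in which diagonal terms reward relevance and off-diagonal terms penalize redundancy, then find the minimizing bit string. Brute force stands in for QAOA here, and the relevance/redundancy measures (absolute correlation with the label, pairwise feature correlation) and the trade-off weight alpha are illustrative assumptions, not the paper's exact formulation.

```python
import itertools
import numpy as np

def feature_selection_qubo(X, y, alpha=0.5):
    """Q[i,i] = -|corr(x_i, y)| (relevance reward); Q[i,j] scaled by
    |corr(x_i, x_j)| (redundancy penalty). Minimizing z^T Q z over
    z in {0,1}^d selects a feature subset."""
    d = X.shape[1]
    Q = np.zeros((d, d))
    for i in range(d):
        Q[i, i] = -abs(np.corrcoef(X[:, i], y)[0, 1])
        for j in range(i + 1, d):
            r = abs(np.corrcoef(X[:, i], X[:, j])[0, 1])
            Q[i, j] = Q[j, i] = 0.5 * alpha * r
    return Q

def brute_force_ground_state(Q):
    # Exhaustive search over bit strings; QAOA would approximate the
    # ground state of the equivalent Ising Hamiltonian for larger d.
    d = Q.shape[0]
    return min((np.array(z) @ Q @ np.array(z), z)
               for z in itertools.product([0, 1], repeat=d))[1]

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 8))
y = X[:, 0] + 0.5 * X[:, 3] + 0.1 * rng.standard_normal(200)
Q = feature_selection_qubo(X, y)
print(brute_force_ground_state(Q))  # bit string marking selected features
```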
The human ear is generally universal, collectible, distinct, and permanent. Ear-based biometric recognition is a niche and recent approach that is being explored. For any ear-based biometric algorithm to perform well, ear detection and segmentation need to be performed accurately. While significant work exists in the literature on bounding-box detection, few approaches output a segmentation mask for ears. This paper trains and compares three newer models against the state-of-the-art Mask R-CNN (ResNet-101 + FPN) model across four different datasets. The reported Average Precision (AP) scores show that the newer models outperform the state of the art, but no single model performs best across all datasets.
With the growth of cyber-attacks and cyber espionage, the need for better and more powerful intrusion detection systems (IDS) is more pressing than ever. The fundamental task of an IDS is to act as the first line of defense in detecting attacks on the internet. As intruders' strategies become more sophisticated and harder to detect, researchers have started to apply novel machine learning (ML) techniques to detect intruders effectively, thereby preserving internet users' information and the overall trust in network security. Over the last decade there has been an explosive surge of intrusion detection techniques based on ML and deep learning (DL) architectures, evaluated on various cybersecurity datasets such as DARPA, KDDCUP'99, NSL-KDD, CAIDA, CTU-13, and UNSW-NB15. In this study, we review the contemporary literature and provide a comprehensive survey of different types of intrusion detection techniques that use support vector machine (SVM) algorithms as the classifier. We focus only on studies evaluated on the two most widely used datasets in cybersecurity, namely the KDDCUP'99 and NSL-KDD datasets. We provide a summary of each method, identifying the role of the SVM classifier and all the other algorithms involved in the studies. Furthermore, we present a critical review of each method in tabular form, highlighting the performance measures, strengths, and limitations of each surveyed method.
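As a minimal sketch of the pipeline common to the surveyed studies, the snippet below trains an RBF-kernel SVM on NSL-KDD-style features with scikit-learn. The file name, column handling, and hyperparameters are placeholder assumptions; the actual studies apply dataset-specific preprocessing and tuning.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

# Hypothetical CSV export of NSL-KDD with numeric features and a
# binary 'label' column (normal vs. attack).
df = pd.read_csv("nsl_kdd_numeric.csv")
X, y = df.drop(columns=["label"]), df["label"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

# Feature scaling matters for RBF SVMs; C and gamma are typical
# starting points, not tuned values.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```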
Artificial intelligence (AI) is pervasive across disciplines and fields, and biomedical image and signal processing is no exception. The growing and widespread interest in the topic has triggered vast research activity, reflected in an exponential growth in the number of research works. By learning from large-scale and diverse biomedical data, machine and deep learning models have revolutionized various tasks, such as modeling, segmentation, registration, classification and synthesis, outperforming traditional techniques. However, the difficulty of translating the results into biologically/clinically interpretable information is preventing their full exploitation in the field. Explainable AI (XAI) attempts to fill this translational gap by providing means to make the models interpretable and to supply explanations. Different solutions have been proposed so far, and interest from the community is growing. This paper aims to provide an overview of XAI in biomedical data processing and points to an upcoming special issue on deep learning in biomedical image and signal processing of the IEEE Signal Processing Magazine, to appear in March 2022.
Language models demonstrate both quantitative improvements and new qualitative capabilities as they increase in scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers, spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
Most real-world optimization problems are difficult to solve with traditional statistical techniques or with metaheuristics. The main difficulty is related to the existence of a considerable number of local optima, which may result in the premature convergence of the optimization process. To address this problem, we propose a novel heuristic method for constructing a smooth surrogate model of the original function. The surrogate function is easier to optimize but maintains a fundamental property of the original rugged fitness landscape: the location of the global optimum. To create such a surrogate model, we consider a linear genetic programming approach enhanced by a self-tuning fitness function. The proposed algorithm, called the GP-FST-PSO Surrogate Model, achieves satisfactory results in both the search for the global optimum and the production of a visual approximation of the original benchmark function (in the two-dimensional case).
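The key invariant (a smooth surrogate that preserves the location of the global optimum) can be illustrated on a standard rugged benchmark. Below, the Rastrigin function and a hand-picked smooth quadratic share their minimizer at the origin, so optimizing the smooth function locates the rugged one's global optimum while direct local search gets trapped. This only illustrates the property; it is not the GP-FST-PSO construction.

```python
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    # Rugged landscape: many local optima, global optimum at the origin.
    x = np.asarray(x)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def smooth_surrogate(x):
    # Smooth stand-in that shares the global optimum's location.
    return np.sum(np.asarray(x)**2)

x0 = np.array([3.7, -2.9])
direct = minimize(rastrigin, x0, method="Nelder-Mead")
via_surrogate = minimize(smooth_surrogate, x0, method="Nelder-Mead")

print(direct.x)         # typically stuck near a local optimum of Rastrigin
print(via_surrogate.x)  # ~[0, 0]: the true global optimum's location
```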
There is an increasing interest in developing artificial intelligence (AI) systems to process and interpret electronic health records (EHRs). Natural language processing (NLP) powered by pretrained language models is the key technology for medical AI systems utilizing clinical narratives. However, there are few clinical language models, the largest of which trained in the clinical domain is comparatively small at 110 million parameters (compared with billions of parameters in the general domain). It is not clear how large clinical language models with billions of parameters can help medical AI systems utilize unstructured EHRs. In this study, we develop from scratch a large clinical language model - GatorTron - using >90 billion words of text (including >82 billion words of de-identified clinical text) and systematically evaluate it on 5 clinical NLP tasks including clinical concept extraction, medical relation extraction, semantic textual similarity, natural language inference (NLI), and medical question answering (MQA). We examine how (1) scaling up the number of parameters and (2) scaling up the size of the training data could benefit these NLP tasks. GatorTron models scale up the clinical language model from 110 million to 8.9 billion parameters and improve 5 clinical NLP tasks (e.g., 9.6% and 9.5% improvement in accuracy for NLI and MQA), which can be applied to medical AI systems to improve healthcare delivery. The GatorTron models are publicly available at: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/clara/models/gatortron_og.